Utilising Asymmetric Parallelism in Multi-Core MCS Implemented via Cyclic Executives

Authors

  • Tom Fleming
  • Alan Burns
Abstract

It is becoming increasingly evident that future and near-future real-time systems will be required to execute on powerful multi-core hardware platforms. This requirement is born initially of necessity, as single-core architectures are being abandoned in favour of multi-core; however, many also see the opportunity to use these powerful platforms to combine previously federated functionality. By integrating functionality onto the same hardware platform, the situation inevitably arises where applications of differing levels of criticality must execute alongside each other. This poses the challenge of providing a suitable level of separation for the higher criticality work, such that it may be guaranteed that no lower criticality work can affect its execution. Providing such mechanisms is fundamental to the certification and validation of Mixed Criticality Systems (MCS).

Complicating the matter further is the use of multi-core platforms themselves. Typical real-time applications of a high level of criticality are, and have long been, residents of the single-core/processor domain. The introduction of multiple cores brings problems such as interference between tasks, controlling memory access and inter-core communication, to name a few. In addition to these issues, when considering a mixed criticality system, one must also ensure sufficient isolation of criticality levels, such that none of the above factors allows a lower criticality level to impact a higher one. For example, a lower criticality application must not cause contention on the shared bus and adversely affect the execution of a higher criticality task such that its execution can no longer be guaranteed within safe bounds. Nevertheless, multi-core platforms promise improvements over previous single-core architectures. In addition to increased processing capacity, they offer improved power usage and thermal output due to lower individual clock speeds. Finally, they provide a platform amenable to parallelised applications, which may benefit lower criticality work, particularly tasks such as run-time simulation and image processing.

With all these advantages and drawbacks in mind, we propose a design optimisation to ease, in some way, the transition from single-core to multi-core architectures. Our work is based on the well-known Cyclic Executive paradigm [1], where a barrier protocol [5] is used to strictly separate the execution of differing levels of criticality. We exploit this separation of criticality-level execution to pursue the goal of reducing the number of cores on which the highest criticality (HI) work may execute, the aim being that with fewer cores, simpler verification with lower overheads becomes available. We illustrate this optimisation and its impact with an example, and provide experimental work investigating the reduction in the number of cores for high criticality work using synthetic task sets. This work aims to present the reasoning behind, and the advantages of, reducing the number of cores for higher criticality level execution.

The remainder of this paper is structured as follows: Section II describes the system model and reviews prior work on ILP, Section III presents and discusses the notion of limiting the number of cores for HI criticality execution, Section IV provides some insight via an example, Section V provides an evaluation and Section VI poses some concluding remarks.
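The abstract's central mechanism is a barrier protocol that strictly separates criticality-level execution within each frame of a cyclic executive: HI-criticality work completes on its (possibly reduced) set of cores before any LO-criticality work is released. The paper gives no code, so the following is only a minimal illustrative sketch of that idea using Python threads standing in for cores; the core counts, frame counts, and workload stubs are all hypothetical, not taken from the paper.

```python
import threading

NUM_CORES = 4    # hypothetical core count; threads stand in for cores
HI_CORES = 2     # HI-criticality work restricted to the first two cores
NUM_FRAMES = 3

barrier = threading.Barrier(NUM_CORES)  # all cores synchronise here
log = []
log_lock = threading.Lock()

def record(entry):
    # Append to a shared log; append order reflects real execution order.
    with log_lock:
        log.append(entry)

def core(core_id):
    """One worker per 'core' running the frames of the cyclic executive."""
    for frame in range(NUM_FRAMES):
        # Phase 1: only the designated HI cores execute HI-criticality work.
        if core_id < HI_CORES:
            record(("HI", core_id, frame))
        # Barrier: no LO work is released until every core arrives here,
        # i.e. until all HI work in this frame has finished.
        barrier.wait()
        # Phase 2: every core may execute LO-criticality work.
        record(("LO", core_id, frame))
        barrier.wait()  # end-of-frame barrier before the next release

threads = [threading.Thread(target=core, args=(i,)) for i in range(NUM_CORES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Check the separation property: within every frame, all HI entries
# were logged before any LO entry.
for frame in range(NUM_FRAMES):
    entries = [e for e in log if e[2] == frame]
    last_hi = max(i for i, e in enumerate(entries) if e[0] == "HI")
    first_lo = min(i for i, e in enumerate(entries) if e[0] == "LO")
    assert last_hi < first_lo
```

The barrier gives the temporal isolation the abstract describes: LO-criticality tasks cannot run, and so cannot cause contention, while HI work is in progress, and restricting HI work to fewer cores shrinks the portion of the platform that must be verified to the highest assurance level.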


Similar resources

Semi-partitioned Cyclic Executives for Mixed Criticality Systems

In a cyclic executive, a series of frames are executed in sequence; once the series is complete the sequence is repeated. Within each frame, units of computation are executed, again in sequence. In implementing cyclic executives upon multi-core platforms, there is advantage in coordinating the execution of the cores so that frames are released at the same time across all cores. For mixed critic...


Selecting the Best Tridiagonal System Solver Projected on Multi-Core CPU and GPU Platforms

Nowadays multicore processors and graphics cards are commodity hardware that can be found in personal computers. Both CPU and GPU are capable of performing high-end computations. In this paper we present and compare parallel implementations of two tridiagonal system solvers. We analyze the cyclic reduction method, as an example of fine-grained parallelism, and Bondeli’s algorithm, as a coarse-g...


Enabling and Scaling Matrix Computations on Heterogeneous Multi-Core and Multi-GPU Systems

We present a new approach to utilizing all CPU cores and all GPUs on heterogeneous multicore and multi-GPU systems to support dense matrix computations efficiently. The main idea is that we treat a heterogeneous system as a distributedmemory machine, and use a heterogeneous multi-level block cyclic distribution method to allocate data to the host and multiple GPUs to minimize communication. We ...


Efficient Support for Matrix Computations on Heterogeneous Multi-core and Multi-GPU Architectures

We present a new methodology for utilizing all CPU cores and all GPUs on a heterogeneous multicore and multi-GPU system to support matrix computations efficiently. Our approach is able to achieve four objectives: a high degree of parallelism, minimized synchronization, minimized communication, and load balancing. Our main idea is to treat the heterogeneous system as a distributed-memory machine...


Design Trade-offs for Memory Level Parallelism on an Asymmetric Multicore System

Asymmetric Multicore Processors (AMP) offer a unique opportunity to integrate many kinds of cores together with each core optimized for different uses. However, the impact of techniques for exploiting high Memory Level Parallelism (MLP) on core specialization and selection on AMPs has not been investigated. Extracting high memory-level parallelism is essential to tolerate long memory latencies,...




Journal:

Volume   Issue

Pages  -

Publication date: 2016